# Long-context Processing
## Phi 4 Reasoning

- **Developer:** microsoft
- **License:** MIT
- **Category:** Large Language Model
- **Framework:** Transformers · supports multiple languages
- **Downloads:** 11.31k · **Likes:** 172

Phi-4 Reasoning is a cutting-edge open-weight reasoning model based on Phi-4, fine-tuned on supervised chain-of-thought trajectory data and further trained with reinforcement learning, specializing in mathematics, science, and programming (loading sketch below).
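A minimal usage sketch for this card, assuming the repository id `microsoft/Phi-4-reasoning` and a standard chat template; this is an illustration, not taken from the model card itself:

```python
# Sketch: load Phi-4 Reasoning and generate a step-by-step answer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning"  # assumed repository id; check the hub listing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens (the model's reasoning and answer).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```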
## ModernBERT Base Long Context QE v1

- **Developer:** ymoslem
- **License:** Apache-2.0
- **Category:** Question Answering System
- **Framework:** Transformers · supports multiple languages
- **Downloads:** 987 · **Likes:** 5

A machine translation quality estimation (QE) model fine-tuned from ModernBERT-base, supporting long-context, document-level evaluation (scoring sketch below).
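A minimal scoring sketch for this card. The repository id `ymoslem/ModernBERT-base-long-context-qe-v1`, the paired source/translation input format, and the single-logit regression head are assumptions; the model card should be checked for the exact interface:

```python
# Sketch: estimate translation quality for a source/translation pair.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ymoslem/ModernBERT-base-long-context-qe-v1"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

source = "The contract enters into force on 1 January 2026."
translation = "Der Vertrag tritt am 1. Januar 2026 in Kraft."

# Assumed input format: source and translation encoded as a sentence pair.
inputs = tokenizer(source, translation, truncation=True, return_tensors="pt")
score = model(**inputs).logits.squeeze().item()  # assumed: one regression logit = quality score
print(f"estimated quality: {score:.3f}")
```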
## Vi Qwen2 7B RAG

- **Developer:** AITeamVN
- **License:** Apache-2.0
- **Category:** Large Language Model
- **Framework:** Transformers · Other
- **Downloads:** 737 · **Likes:** 15

A Vietnamese large language model specialized for retrieval-augmented generation (RAG), fine-tuned from Qwen2-7B-Instruct and supporting an 8192-token context length (prompting sketch below).
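A minimal retrieval-augmented prompting sketch for this card, assuming the repository id `AITeamVN/Vi-Qwen2-7B-RAG` and a generic context-plus-question prompt; the retrieved passage is a stand-in for real retriever output:

```python
# Sketch: answer a question over retrieved context with the RAG-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AITeamVN/Vi-Qwen2-7B-RAG"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

retrieved_context = "Hà Nội là thủ đô của Việt Nam."  # stand-in for retriever output
question = "Thủ đô của Việt Nam là gì?"

# Assumed prompt layout: retrieved context followed by the question.
prompt = (
    "Dựa vào ngữ cảnh sau để trả lời câu hỏi.\n\n"
    f"Ngữ cảnh: {retrieved_context}\n\nCâu hỏi: {question}"
)
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Prompt plus retrieved context must stay within the model's 8192-token window.
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```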
## LLaVA Next Mistral 7B 4096

- **Developer:** Mantis-VL
- **Category:** Image-Text-to-Text
- **Framework:** Transformers
- **Downloads:** 40 · **Likes:** 2

A multimodal model fine-tuned from LLaVA-v1.6-Mistral-7B for joint understanding of image and text inputs (inference sketch below).
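A minimal image-and-text inference sketch for this card, using the LLaVA-NeXT classes that Transformers provides for LLaVA-v1.6 checkpoints; the repository id `Mantis-VL/llava-next-mistral-7b-4096`, the image URL, and the Mistral-style prompt format are assumptions:

```python
# Sketch: describe an image with a LLaVA-NeXT (LLaVA-v1.6) checkpoint.
import requests
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

model_id = "Mantis-VL/llava-next-mistral-7b-4096"  # assumed repository id
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)  # placeholder URL
prompt = "[INST] <image>\nWhat is shown in this picture? [/INST]"  # assumed Mistral-style LLaVA prompt

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```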